Microsoft AI news: Making AI easier, simpler, more responsible
Today is a big day for AI announcements from Microsoft, both from this week's Build conference and beyond. But one common theme bubbles up consistently: for AI to become more useful for business applications, it needs to be easier, simpler, more explainable, more accessible and, most of all, responsible. Responsible AI is at the heart of much of today's Build news, John Montgomery, corporate vice president of Azure AI, told VentureBeat. Most notable is Azure Machine Learning's preview of a responsible AI dashboard, which brings together capabilities released over the past 18 months (data explorer, model interpretability, error analysis, and counterfactual and causal inference analysis) into a single view.
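To make one of those capabilities concrete, here is a minimal, self-contained sketch of the kind of cohort-based error analysis such a dashboard surfaces: grouping a model's predictions by a feature and comparing error rates across cohorts. This is an illustrative toy, not Azure Machine Learning's actual API; the function name, record schema, and data are all hypothetical.

```python
# Toy sketch of cohort-based error analysis (hypothetical names and data;
# not the Azure Machine Learning responsible AI dashboard's real API).

def error_rate_by_cohort(records, cohort_key):
    """Group labeled predictions by a feature and report each cohort's error rate."""
    cohorts = {}
    for rec in records:
        key = rec[cohort_key]
        total, errors = cohorts.get(key, (0, 0))
        # Count a record as an error when the prediction disagrees with the label.
        cohorts[key] = (total + 1, errors + (rec["label"] != rec["prediction"]))
    return {key: errors / total for key, (total, errors) in cohorts.items()}

# Hypothetical evaluation records for a binary classifier.
records = [
    {"region": "EU", "label": 1, "prediction": 1},
    {"region": "EU", "label": 0, "prediction": 1},
    {"region": "US", "label": 1, "prediction": 1},
    {"region": "US", "label": 0, "prediction": 0},
]
rates = error_rate_by_cohort(records, "region")
```

Surfacing per-cohort error rates like this (here, half the EU records are misclassified while the US cohort is error-free) is what lets practitioners spot slices of data where a model underperforms, which is the motivation behind the dashboard's error analysis view.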
Building AI responsibly from research to practice
The speed at which artificial intelligence (AI) technologies have improved in competency and moved from the lab into mainstream applications has surprised even the most seasoned AI experts. Despite this progress, the practice of AI is still new and hard to do. This creates an interesting dynamic: AI practitioners are learning new AI skills even as they build AI applications, and there are many opportunities to learn and improve. Microsoft's AI principles call out our aspiration to design systems in accordance with the goals of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
Microsoft unveils responsible AI guidelines and dashboard
Microsoft says it wants to make it easier for organizations to use and build AI technology responsibly. During its "Put Responsible AI into Practice" digital event on Dec. 7, the tech giant, with Boston Consulting Group, released 10 guidelines that product leaders can use to implement AI responsibly, without bias and with visibility into the intentions of AI and machine learning algorithms. Enterprises can use the guidelines before, during and after the process of building AI models. Microsoft outlines the guidelines in a three-step framework that starts with using transparent processes to assess and prepare the model and weigh potential risks and benefits. The next step is to design, build and document the model.
New resources and tools to enable product leaders to implement AI responsibly
As AI becomes more deeply embedded in our everyday lives, it is incumbent upon all of us to be thoughtful and responsible in how we apply it to benefit people and society. A principled approach to responsible AI will be essential for every organization as this technology matures. As technical and product leaders look to adopt responsible AI practices and tools, they face several challenges, including identifying the approach best suited to their organizations, products and market. Today, at our Azure event, Put Responsible AI into Practice, we are pleased to share new resources and tools to support customers on this journey, including guidelines for product leaders co-developed by Microsoft and Boston Consulting Group (BCG). While these guidelines are separate from Microsoft's own Responsible AI principles and processes, they are intended to provide guidance for responsible AI development throughout the product lifecycle.